Application of CARE-SD text classifier tools to assess distribution of stigmatizing and doubt-marking language features in EHR

Walker, Drew, Love, Jennifer, Rajwal, Swati, Walker, Isabel C, Cooper, Hannah LF, Sarker, Abeed, Livingston, Melvin III

arXiv.org Artificial Intelligence

Introduction: Electronic health records (EHR) are a critical medium through which patient stigmatization is perpetuated among healthcare teams. Methods: We identified linguistic features of doubt markers and stigmatizing labels in MIMIC-III EHR via expanded lexicon matching and supervised learning classifiers. Predictors of rates of linguistic features were assessed using Poisson regression models. Results: We found higher rates of stigmatizing labels per chart among patients who were Black or African American (RR: 1.16), patients with Medicare/Medicaid or government-run insurance (RR: 2.46), self-pay (RR: 2.12), and patients with a variety of stigmatizing disease and mental health conditions. Patterns among doubt markers were similar, though male patients had higher rates of doubt markers (RR: 1.25). We found increased stigmatizing labels used by nurses (RR: 1.40), and social workers (RR: 2.25), with similar patterns of doubt markers. Discussion: Stigmatizing language occurred at higher rates among historically stigmatized patients, perpetuated by multiple provider types.
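The rate ratios (RR) above compare rates of a linguistic feature per chart between patient groups. A minimal sketch of that comparison, using hypothetical counts (the group names and numbers below are illustrative, not the study's data):

```python
# Illustrative only: hypothetical feature counts, not the study's data.
# A rate ratio compares per-chart rates of a feature between two groups.
group_counts = {
    "group_a": (58, 200),   # (stigmatizing labels found, charts reviewed)
    "group_b": (25, 200),
}
rates = {g: labels / charts for g, (labels, charts) in group_counts.items()}
rate_ratio = rates["group_a"] / rates["group_b"]
print(round(rate_ratio, 2))
```

The study fits Poisson regression models to adjust these rates for multiple predictors at once; the raw ratio above is the unadjusted quantity such a model estimates.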


Large-Scale Analysis of Online Questions Related to Opioid Use Disorder on Reddit

Laud, Tanmay, Kacha-Ochana, Akadia, Sumner, Steven A., Krishnasamy, Vikram, Law, Royal, Schieber, Lyna, De Choudhury, Munmun, ElSherief, Mai

arXiv.org Artificial Intelligence

Opioid use disorder (OUD) is a leading health problem that affects individual well-being as well as general public health. Due to a variety of reasons, including the stigma faced by people using opioids, online communities for recovery and support were formed on different social media platforms. In these communities, people share their experiences and solicit information by asking questions to learn about opioid use and recovery. However, these communities do not always contain clinically verified information. In this paper, we study natural language questions asked in the context of OUD-related discourse on Reddit. We adopt transformer-based question detection along with hierarchical clustering across 19 subreddits to identify six coarse-grained categories and 69 fine-grained categories of OUD-related questions. Our analysis uncovers ten areas of information seeking from Reddit users in the context of OUD: drug sales, specific drug-related questions, OUD treatment, drug uses, side effects, withdrawal, lifestyle, drug testing, pain management and others, during the study period of 2018-2021. Our work provides a major step in improving the understanding of OUD-related questions people ask unobtrusively on Reddit. We finally discuss technological interventions and public health harm reduction techniques based on the topics of these questions.
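The clustering step described above groups detected questions into coarse categories. A stdlib-only sketch of single-linkage agglomerative clustering (the paper's pipeline uses transformer embeddings as input; the 2-D points and cluster count here are stand-ins):

```python
import math

def agglomerate(points, k):
    # Start with each point in its own cluster; repeatedly merge the
    # closest pair of clusters (single linkage) until k clusters remain.
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(math.dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# Toy question embeddings; real ones would be high-dimensional vectors.
clusters = agglomerate([(0, 0), (0, 1), (5, 5), (5, 6), (9, 0)], k=3)
```

In practice a library routine (e.g. `scipy.cluster.hierarchy.linkage`) replaces the quadratic search; the merge logic is the same.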


Socially Constructed Treatment Plans: Analyzing Online Peer Interactions to Understand How Patients Navigate Complex Medical Conditions

Basak, Madhusudan, Sharif, Omar, Hulsey, Jessica, Saunders, Elizabeth C., Goodman, Daisy J., Archibald, Luke J., Preum, Sarah M.

arXiv.org Artificial Intelligence

When faced with complex and uncertain medical conditions (e.g., cancer, mental health conditions, recovery from substance dependency), millions of patients seek online peer support. In this study, we leverage content analysis of online discourse and ethnographic studies with clinicians and patient representatives to characterize how treatment plans for complex conditions are "socially constructed." Specifically, we ground online conversation on medication-assisted recovery treatment to medication guidelines and subsequently surface when and why people deviate from the clinical guidelines. We characterize the implications and effectiveness of socially constructed treatment plans through in-depth interviews with clinical experts. Finally, given the enthusiasm around AI-powered solutions for patient communication, we investigate whether and how socially constructed treatment-related knowledge is reflected in a state-of-the-art large language model (LLM). Leveraging a novel mixed-method approach, this study highlights critical research directions for patient-centered communication in online health communities.


First-of-its-kind implant detects and treats opioid overdoses

Popular Science

Since 1999, the opioid epidemic has killed around 645,000 people in America, a number that would no doubt be even higher were it not for naloxone, an opioid antagonist that can effectively reverse the effects of an overdose. However, time is critical: if naloxone is not administered promptly, the victim's chances of survival diminish rapidly. In a paper published August 14 in Device, a team of researchers describe a device designed to detect the signs of an overdose and automatically deliver a dose of naloxone in as little as 10 seconds. The device, which researchers describe as a "robotic first responder," is named the "implantable system for opioid safety" (iSOS). It's implanted under the user's skin, in the same way as a heart loop recorder.


Leveraging Large Language Models to Extract Information on Substance Use Disorder Severity from Clinical Notes: A Zero-shot Learning Approach

Mahbub, Maria, Dams, Gregory M., Srinivasan, Sudarshan, Rizy, Caitlin, Danciu, Ioana, Trafton, Jodie, Knight, Kathryn

arXiv.org Artificial Intelligence

Substance use disorder (SUD) poses a major concern due to its detrimental effects on health and society. SUD identification and treatment depend on a variety of factors such as severity, co-determinants (e.g., withdrawal symptoms), and social determinants of health. Existing diagnostic coding systems used by American insurance providers, like the International Classification of Diseases (ICD-10), lack granularity for certain diagnoses, but clinicians add this granularity (such as that found in the Diagnostic and Statistical Manual of Mental Disorders classification, or DSM-5) as supplemental unstructured text in clinical notes. Traditional natural language processing (NLP) methods face limitations in accurately parsing such diverse clinical language. Large Language Models (LLMs) offer promise in overcoming these challenges by adapting to diverse language patterns. This study investigates the application of LLMs for extracting severity-related information for various SUD diagnoses from clinical notes. We propose a workflow employing zero-shot learning of LLMs with carefully crafted prompts and post-processing techniques. Through experimentation with Flan-T5, an open-source LLM, we demonstrate its superior recall compared to the rule-based approach. Focusing on 11 categories of SUD diagnoses, we show the effectiveness of LLMs in extracting severity information, contributing to improved risk assessment and treatment planning for SUD patients.
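A zero-shot workflow of this kind pairs a prompt template with post-processing that maps free-text generations onto the DSM-5 severity scale. A minimal sketch (the template wording and `postprocess` helper are illustrative assumptions, not the paper's actual prompts):

```python
import re

# Hypothetical zero-shot prompt template; the model fills in after "Severity:".
PROMPT = (
    "Read the clinical note and state the severity of the substance use "
    "disorder diagnosis (mild, moderate, or severe). If not stated, "
    "answer 'unknown'.\nNote: {note}\nSeverity:"
)

def postprocess(generation: str) -> str:
    # Normalize a free-text generation onto the DSM-5 severity categories.
    m = re.search(r"\b(mild|moderate|severe)\b", generation.lower())
    return m.group(1) if m else "unknown"
```

The same template would be rendered per note (`PROMPT.format(note=...)`) and sent to a model such as Flan-T5; only the post-processed label is kept for evaluation.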


Question-Answering System Extracts Information on Injection Drug Use from Clinical Notes

Mahbub, Maria, Goethert, Ian, Danciu, Ioana, Knight, Kathryn, Srinivasan, Sudarshan, Tamang, Suzanne, Rozenberg-Ben-Dror, Karine, Solares, Hugo, Martins, Susana, Trafton, Jodie, Begoli, Edmon, Peterson, Gregory

arXiv.org Artificial Intelligence

Background: Injection drug use (IDU) is a dangerous health behavior that increases mortality and morbidity. Identifying IDU early and initiating harm reduction interventions can benefit individuals at risk. However, extracting IDU behaviors from patients' electronic health records (EHR) is difficult because there is no International Classification of Diseases (ICD) code, and the only place IDU information can be found is in unstructured free-text clinical notes. Although natural language processing can efficiently extract this information from unstructured data, there are no validated tools. Methods: To address this gap in clinical information, we design and demonstrate a question-answering (QA) framework to extract information on IDU from clinical notes. Our framework involves two main steps: (1) generating a gold-standard QA dataset and (2) developing and testing the QA model. We utilize 2323 clinical notes of 1145 patients sourced from the VA Corporate Data Warehouse to construct the gold-standard dataset for developing and evaluating the QA model. We also demonstrate the QA model's ability to extract IDU-related information on temporally out-of-distribution data. Results: Here we show that for a strict match between gold-standard and predicted answers, the QA model achieves a 51.65% F1 score. For a relaxed match between the gold-standard and predicted answers, the QA model obtains a 78.03% F1 score, along with 85.38% Precision and 79.02% Recall scores. Moreover, the QA model demonstrates consistent performance when subjected to temporally out-of-distribution data. Conclusions: Our study introduces a QA framework designed to extract IDU information from clinical notes, aiming to enhance the accurate and efficient detection of people who inject drugs, extract relevant information, and ultimately facilitate informed patient care.
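Strict versus relaxed matching in extractive QA is commonly implemented as exact string match versus token-overlap F1 (SQuAD-style). A sketch under that assumption; the paper may normalize answers differently:

```python
from collections import Counter

def strict_match(pred: str, gold: str) -> bool:
    # "Strict": predicted span must equal the gold span (case-insensitive).
    return pred.strip().lower() == gold.strip().lower()

def token_f1(pred: str, gold: str) -> float:
    # "Relaxed": harmonic mean of token-level precision and recall.
    p, g = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)
```

A partially overlapping span fails the strict check but still earns partial credit under the relaxed score, which is why the two F1 numbers above (51.65% vs. 78.03%) diverge.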


Characterizing Information Seeking Events in Health-Related Social Discourse

Sharif, Omar, Basak, Madhusudan, Parvin, Tanzia, Scharfstein, Ava, Bradham, Alphonso, Borodovsky, Jacob T., Lord, Sarah E., Preum, Sarah M.

arXiv.org Artificial Intelligence

Social media sites have become a popular platform for individuals to seek and share health information. Despite the progress in natural language processing for social media mining, a gap remains in analyzing health-related texts on social discourse in the context of events. Event-driven analysis can offer insights into different facets of healthcare at an individual and collective level, including treatment options, misconceptions, knowledge gaps, etc. This paper presents a paradigm to characterize health-related information-seeking in social discourse through the lens of events. Events here are broad categories, defined with domain experts, that capture the trajectory of the treatment/medication. To illustrate the value of this approach, we analyze Reddit posts regarding medications for Opioid Use Disorder (OUD), a critical global health concern. To the best of our knowledge, this is the first attempt to define event categories for characterizing information-seeking in OUD social discourse. Guided by domain experts, we develop TREAT-ISE, a novel multilabel treatment information-seeking event dataset to analyze online discourse on an event-based framework. This dataset contains Reddit posts on information-seeking events related to recovery from OUD, where each post is annotated based on the type of events. We also establish a strong performance benchmark (77.4% F1 score) for the task by employing several machine learning and deep learning classifiers. Finally, we thoroughly investigate the performance and errors of ChatGPT on this task, providing valuable insights into the LLM's capabilities and ongoing characterization efforts.
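Because each post can carry several event labels at once, multilabel benchmarks like the 77.4% F1 above are typically scored over label sets rather than single predictions. A minimal micro-F1 sketch (the event names below are invented examples, and the paper may use a different averaging scheme):

```python
def micro_f1(gold_sets, pred_sets):
    # Pool true/false positives and false negatives across all posts,
    # then compute a single F1 over the pooled counts.
    tp = fp = fn = 0
    for gold, pred in zip(gold_sets, pred_sets):
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical event labels per post.
gold = [{"dosing", "tapering"}, {"side-effects"}]
pred = [{"dosing"}, {"side-effects", "relapse"}]
score = micro_f1(gold, pred)
```

Micro-averaging weights frequent labels more heavily; macro-averaging (mean of per-label F1) is the usual alternative when rare event categories matter equally.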


Artificial intelligence may predict opioid use disorder, research shows

#artificialintelligence

The machine learning model analyzed health data from nearly 700,000 patients in Alberta who received opioid prescriptions between 2014 and 2018, cross-referencing 62 factors such as the number of doctor and emergency room visits, diagnoses, and sociodemographic information. Researchers found the top risk factors for opioid use disorder included frequency of opioid use, high dosage, and a history of other substance use disorders. The model predicted high-risk patients with an accuracy of 86 per cent when it was validated against a new sample of 316,000 patients from 2019. According to the study, the findings suggest early detection of opioid use disorder is possible with a data-driven approach and may provide timely clinical intervention and policy changes to help curb the current crisis. "It's important that the model's prediction of whether someone will develop opioid use disorder is interpreted as a risk instead of a label," said first author Yang Liu, a post-doctoral fellow in psychiatry, in the release.


Artificial intelligence-based tool may help diagnose opioid addiction earlier

#artificialintelligence

Researchers have used machine learning, a type of artificial intelligence, to develop a prediction model for the early diagnosis of opioid use disorder. The advance is described in Pharmacology Research & Perspectives. The model was generated from information in a commercial claim database from 2006 through 2018 of 10 million medical insurance claims from 550,000 patient records. It relied on data such as demographics, chronic conditions, diagnoses and procedures, and medication prescriptions. The tool led to a diagnosis of opioid use disorder that was on average 14.4 months earlier than it was diagnosed clinically.


Identifying Risk of Opioid Use Disorder for Patients Taking Opioid Medications with Deep Learning

Dong, Xinyu, Deng, Jianyuan, Rashidian, Sina, Abell-Hart, Kayley, Hou, Wei, Rosenthal, Richard N, Saltz, Mary, Saltz, Joel, Wang, Fusheng

arXiv.org Machine Learning

The United States is experiencing an opioid epidemic, with more than 10 million opioid misusers aged 12 or older each year. Identifying patients at high risk of Opioid Use Disorder (OUD) can help to make early clinical interventions to reduce the risk of OUD. Our goal is to predict OUD patients among opioid prescription users through analyzing electronic health records with machine learning and deep learning methods. This will help us to better understand the diagnoses of OUD, providing new insights on the opioid epidemic. Electronic health records of patients who had been prescribed medications containing active opioid ingredients were extracted from the Cerner Health Facts database between January 1, 2008 and December 31, 2017. Long Short-Term Memory (LSTM) models were applied to predict future opioid use disorder risk based on the five most recent encounters, and compared to Logistic Regression, Random Forest, Decision Tree and Dense Neural Network. Prediction performance was assessed using F1 score, precision, recall, and AUROC. Our temporal deep learning model provided promising prediction results which outperformed other methods, with an F1 score of 0.8023 and AUROC of 0.9369. The model can identify OUD-related medications and vital signs as important features for the prediction. The LSTM-based temporal deep learning model is effective at predicting opioid use disorder using a patient's past history of electronic health records, with minimal domain knowledge. It has potential to improve clinical decision support for early intervention and prevention to combat the opioid epidemic.
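An LSTM handles the encounter sequence by updating a hidden state and a cell state once per encounter, so the final state summarizes the visit history. A scalar sketch of one cell step, with toy weights and a made-up sequence of per-encounter risk scores (real models use weight matrices and feature vectors, typically via a library such as PyTorch's `nn.LSTM`):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, w):
    # One scalar LSTM cell step: gate the candidate state into the cell.
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h + w["bg"])  # candidate state
    c = f * c + i * g
    return o * math.tanh(c), c

# Toy weights; a trained model would learn these from EHR data.
weights = {k: 0.5 for k in ("wi", "ui", "wf", "uf", "wo", "uo", "wg", "ug")}
weights.update({k: 0.0 for k in ("bi", "bf", "bo", "bg")})

h = c = 0.0
for encounter_score in [0.2, 0.4, 0.1, 0.8, 0.6]:  # five most recent encounters
    h, c = lstm_step(encounter_score, h, c, weights)
risk = sigmoid(h)  # read out OUD risk from the final hidden state
```

The forget gate is what lets the model weight recent encounters against older ones, which is the advantage the abstract claims over the non-temporal baselines.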